A Predual Proximal Point Algorithm solving a Non Negative Basis Pursuit Denoising model
This paper develops an implementation of a Predual Proximal Point Algorithm (PPPA) solving a Non Negative Basis Pursuit Denoising model. The model imposes a constraint on the l2 norm of the residual instead of penalizing it. The PPPA solves the predual of the problem with a Proximal Point Algorithm (PPA), and the minimization that needs to be performed at each iteration of the PPA is solved with a dual method. We prove that these dual variables converge to a solution of the initial problem. Our analysis shows that we turn a constrained non-differentiable convex problem into a short sequence of nice concave maximization problems. By nice, we mean that the functions which are maximized are differentiable and their gradient is Lipschitz. The algorithm is easy to implement, easier to tune, and more general than the algorithms found in the literature. In particular, it can be applied to Basis Pursuit Denoising (BPDN) and Non Negative Basis Pursuit Denoising (NNBPDN), and it makes no assumption on the dictionary. We prove its convergence to the set of solutions of the model and provide some convergence rates. Experiments on image approximation show that the performance of the PPPA is at the current state of the art for BPDN.
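As a runnable point of reference, the closely related *penalized* nonnegative sparse model can be minimized with plain proximal gradient (ISTA). This is a simpler stand-in, not the paper's predual proximal point method (which handles the constrained form with a bound on the l2 residual); the matrix sizes and the penalty weight below are illustrative assumptions.

```python
import numpy as np

def nn_bpdn_ista(A, b, lam, step=None, iters=500):
    """Proximal gradient (ISTA) for the penalized nonnegative model
        min_{x >= 0}  0.5 * ||Ax - b||^2 + lam * sum(x).
    Illustrative stand-in only: the paper instead solves the
    constrained variant (||Ax - b||_2 <= eps) via a predual PPA."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L with L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        # prox of lam*sum(x) plus the x >= 0 constraint is a
        # one-sided soft threshold
        x = np.maximum(x - step * (grad + lam), 0.0)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60))
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [1.0, 2.0, 0.5]
b = A @ x_true + 0.01 * rng.standard_normal(30)
x_hat = nn_bpdn_ista(A, b, lam=0.1)
```

The one-sided threshold is exactly the proximal operator of the nonnegative l1 penalty, which is what makes the nonnegativity constraint essentially free here.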
A Two-stage Classification Method for High-dimensional Data and Point Clouds
High-dimensional data classification is a fundamental task in machine
learning and imaging science. In this paper, we propose a two-stage multiphase
semi-supervised classification method for classifying high-dimensional data and
unstructured point clouds. To begin with, a fuzzy classification method such as
the standard support vector machine is used to generate a warm initialization.
We then apply a two-stage approach named SaT (smoothing and thresholding) to
improve the classification. In the first stage, an unconstrained convex
variational model is implemented to purify and smooth the initialization,
followed by the second stage which is to project the smoothed partition
obtained at stage one to a binary partition. These two stages can be repeated,
with the latest result as a new initialization, to keep improving the
classification quality. We show that the convex model of the smoothing stage
has a unique solution and can be solved by a specifically designed primal-dual
algorithm whose convergence is guaranteed. We test our method and compare it
with the state-of-the-art methods on several benchmark data sets. The
experimental results demonstrate clearly that our method is superior in both
the classification accuracy and computation speed for high-dimensional data and
point clouds.
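One round of the smoothing-and-thresholding (SaT) idea can be sketched in a few lines. A quadratic graph-Laplacian smoother stands in for the paper's variational model and primal-dual solver; the chain graph, the weight mu, and the fuzzy scores below are simplifying assumptions, but the sketch keeps the key property that the smoothing stage is an unconstrained convex problem with a unique solution.

```python
import numpy as np

def sat_two_stage(f, L, mu=1.0):
    """One SaT round, as a minimal sketch.
    Stage 1 smooths the fuzzy labels f by solving the convex model
        min_u  0.5 * ||u - f||^2 + (mu / 2) * u^T L u,
    whose unique minimizer is u = (I + mu * L)^{-1} f.
    Stage 2 projects u onto a binary partition by thresholding at 1/2."""
    n = len(f)
    u = np.linalg.solve(np.eye(n) + mu * L, f)   # stage 1: smoothing
    return (u > 0.5).astype(int), u              # stage 2: thresholding

# chain graph on 6 nodes; noisy fuzzy scores from a warm initialization
n = 6
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1.0
f = np.array([0.9, 0.8, 0.6, 0.3, 0.2, 0.4])
labels, u = sat_two_stage(f, L, mu=1.0)
```

As in the paper, the two stages can be repeated with `labels` (or `u`) feeding a new initialization.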
Matching Pursuit Shrinkage in Hilbert Spaces
In this paper, we study a variant of Matching Pursuit named Matching Pursuit Shrinkage. Like Matching Pursuit, it seeks an approximation of a datum living in a Hilbert space by a sparse linear expansion in an enumerable set of atoms. The difference with the usual Matching Pursuit is that, once an atom has been selected, we do not erase all the information along the direction of this atom; doing so, we can evolve slowly along that direction. The goal is to attenuate the negative impact of bad atom selections. We analyse the link between the shrinkage function used by the algorithm and the fact that the result belongs to an lp space.
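The partial-update idea can be sketched as follows: at each step the most correlated atom is selected, but only a *shrunk* coefficient is subtracted from the residual. The soft-threshold shrinkage used here is one admissible choice for illustration, not the paper's prescribed function, and the dictionary sizes are arbitrary.

```python
import numpy as np

def mp_shrinkage(D, b, theta=0.5, iters=50):
    """Matching Pursuit Shrinkage, minimal sketch.
    D has unit-norm columns (atoms).  Instead of removing the full
    correlation c_k along the selected atom, only the shrunk value
    s(c_k) is removed, so the iterate evolves slowly along that
    direction and bad selections do less damage."""
    x = np.zeros(D.shape[1])
    r = b.copy()
    for _ in range(iters):
        c = D.T @ r                          # correlations with the atoms
        k = np.argmax(np.abs(c))             # best atom
        s = np.sign(c[k]) * max(abs(c[k]) - theta, 0.0)  # shrunk coeff.
        if s == 0.0:
            break                            # all correlations below theta
        x[k] += s
        r -= s * D[:, k]                     # partial, not full, update
    return x, r

rng = np.random.default_rng(1)
D = rng.standard_normal((20, 40))
D /= np.linalg.norm(D, axis=0)
b = 2.0 * D[:, 0] + 1.0 * D[:, 3]
x, r = mp_shrinkage(D, b)
```

Since the shrunk coefficient satisfies |s| <= |c_k| and shares its sign, each nonzero step strictly decreases the residual norm.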
DSFNet: Convolutional Encoder-Decoder Architecture Combined Dual-GCN and Stand-alone Self-attention by Fast Normalized Fusion for Polyps Segmentation
In the past few decades, deep learning technology has been widely used in
medical image segmentation and has made significant breakthroughs in the fields
of liver and liver tumor segmentation, brain and brain tumor segmentation,
optic disc segmentation, heart image segmentation, and so on. However, the
segmentation of polyps is still a challenging task since the surface of the
polyps is flat and the color is very similar to that of surrounding tissues.
This leads to unclear boundaries between polyps and the
surrounding mucosa, local overexposure, and bright-spot reflections. To counter
this problem, this paper presents a novel U-shaped network, namely DSFNet,
which effectively combines the advantages of Dual-GCN and self-attention
mechanisms. First, we introduce a feature enhancement block module based on
Dual-GCN module as an attention mechanism to enhance the feature extraction of
local spatial and structural information with fine granularity. Second, the
stand-alone self-attention module is designed to enhance the integration
ability of the decoding stage model to global information. Finally, the Fast
Normalized Fusion method with trainable weights is used to efficiently fuse the
corresponding three feature maps from the encoding, bottleneck, and decoding
blocks, thus promoting information transmission and reducing the semantic gap
between encoder and decoder. Our model is tested on two public datasets
including Endoscene and Kvasir-SEG and compared with other state-of-the-art
models. Experimental results show that the proposed model surpasses other
competitors in many indicators, such as Dice, MAE, and IoU. In the meantime,
ablation studies are also conducted to verify the efficacy and effectiveness of
each module. Qualitative and quantitative analysis indicates that the proposed
model has great clinical significance.
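The Fast Normalized Fusion step can be sketched on its own. The formulation below (ReLU'd trainable scalar weights normalized by their sum plus a small epsilon, rather than a softmax) follows the EfficientDet-style scheme this name usually refers to; the shapes and weight values are illustrative assumptions.

```python
import numpy as np

def fast_normalized_fusion(features, w, eps=1e-4):
    """Fuse same-shaped feature maps with trainable scalar weights.
    ReLU keeps each weight nonnegative; dividing by the sum (plus eps)
    normalizes them cheaply, avoiding a softmax."""
    w = np.maximum(np.asarray(w, dtype=float), 0.0)  # ReLU on the weights
    w = w / (w.sum() + eps)                          # fast normalization
    return sum(wi * f for wi, f in zip(w, features))

# stand-ins for the encoder, bottleneck, and decoder feature maps
enc = np.ones((4, 4))
bottle = 2 * np.ones((4, 4))
dec = 3 * np.ones((4, 4))
fused = fast_normalized_fusion([enc, bottle, dec], w=[1.0, 1.0, 1.0])
```

With equal weights the fusion reduces to a plain average of the three maps; during training the weights would be learned jointly with the network.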
Total Variation Restoration of Images Corrupted by Poisson Noise with Iterated Conditional Expectations
Interpreting the celebrated Rudin-Osher-Fatemi (ROF) model in a Bayesian framework has led to interesting new variants for Total Variation image denoising in the last decade. The Posterior Mean variant avoids the so-called staircasing artifact of the ROF model but is computationally very expensive. Another recent variant, called TV-ICE (for Iterated Conditional Expectation), delivers very similar images but uses a much faster fixed-point algorithm. In the present work, we consider the TV-ICE approach in the case of a Poisson noise model. We derive an explicit form of the recursion operator and show linear convergence of the algorithm, as well as the absence of staircasing effect. We also provide a numerical algorithm that carefully handles precision and numerical overflow issues, and show experiments that illustrate the interest of this Poisson TV-ICE variant.
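The iteration scheme underlying TV-ICE is a plain fixed-point recursion u^{n+1} = F(u^n). The paper's Poisson-case recursion operator has an explicit but lengthy form, so the F below is only a toy contraction standing in for it; the sketch shows the linear (geometric) convergence that a Lipschitz constant below one guarantees.

```python
import numpy as np

def fixed_point(F, u0, iters=60):
    """Generic fixed-point iteration u^{n+1} = F(u^n), the scheme
    that TV-ICE instantiates with its recursion operator.  Returns
    the final iterate and the successive update magnitudes."""
    u = np.asarray(u0, dtype=float)
    errs = []
    for _ in range(iters):
        u_next = F(u)
        errs.append(np.max(np.abs(u_next - u)))
        u = u_next
    return u, errs

# toy contraction with Lipschitz constant 0.5; fixed point u* = 2
F = lambda u: 0.5 * u + 1.0
u, errs = fixed_point(F, np.zeros(3))
```

The successive update magnitudes shrink by the contraction factor 0.5 at every step, which is what "linear convergence" means for such schemes.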
Randomly Projected Convex Clustering Model: Motivation, Realization, and Cluster Recovery Guarantees
In this paper, we propose a randomly projected convex clustering model for
clustering a collection of high-dimensional data points in
with hidden clusters. Compared to the convex clustering model for
clustering the original data with dimension , we prove that, under some mild
conditions, the perfect recovery of the cluster membership assignments by the
convex clustering model, if it exists, is preserved by the randomly projected
convex clustering model with embedding dimension ,
where is some given parameter. We further prove that the
embedding dimension can be improved to be , which is
independent of the number of data points. Extensive numerical experiments
demonstrate the robustness and superior performance of the randomly projected
convex clustering model; the numerical results also show that it can
outperform the randomly projected K-means model in practice.
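The random projection step that motivates the model can be sketched directly: a Gaussian map scaled by 1/sqrt(m) approximately preserves pairwise distances (the Johnson-Lindenstrauss phenomenon), which is why cluster recovery can survive the dimension reduction. Running convex clustering itself requires a dedicated solver, so this sketch only checks the distance preservation; all sizes below are illustrative.

```python
import numpy as np

def random_projection(X, m, rng):
    """Project n points in R^d down to R^m with a Gaussian matrix
    scaled by 1/sqrt(m), the standard JL-style embedding that the
    randomly projected convex clustering model builds on."""
    n, d = X.shape
    P = rng.standard_normal((m, d)) / np.sqrt(m)
    return X @ P.T

def pdists(Z):
    """Pairwise Euclidean distance matrix."""
    G = Z @ Z.T
    sq = np.diag(G)
    return np.sqrt(np.maximum(sq[:, None] + sq[None, :] - 2 * G, 0.0))

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 500))        # 30 points in R^500
Y = random_projection(X, m=100, rng=rng)  # embedded in R^100
distortion = np.abs(pdists(Y) - pdists(X)) / (pdists(X) + np.eye(30))
```

The relative distortion of every pairwise distance stays small even though the dimension drops by a factor of five, which is the geometric fact the recovery guarantees lean on.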